5,126 research outputs found

    Splitting electronic spins with a Kondo double dot device

    We present a simple device made of two small capacitively coupled quantum dots in parallel. This set-up can be used as an efficient "Stern-Gerlach" spin filter, able to simultaneously produce, from a normal metallic lead, two oppositely spin-polarized currents when submitted to a local magnetic field. Our proposal is based on the realization of a Kondo effect where spin and orbital degrees of freedom are entangled, allowing a spatial separation between the two spin-polarized currents. In the low-temperature Kondo regime, the efficiency is very high and the device conductance reaches the unitary limit, $e^2/h$ per spin branch. Comment: 3 pages, 2 figures.
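    Restating the quoted conductance in display form (the comparison with a conventional single-channel Kondo dot, where the two spin channels add up to the standard unitary value, is textbook background rather than a claim of this abstract): each spatially separated spin branch carries

```latex
G_{\uparrow} = G_{\downarrow} = \frac{e^{2}}{h},
\qquad
G_{\uparrow} + G_{\downarrow} = \frac{2e^{2}}{h}.
```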

    Finite-Size Scaling of Charge Carrier Mobility in Disordered Organic Semiconductors

    Simulations of charge transport in amorphous semiconductors are often performed in microscopically sized systems. As a result, charge carrier mobilities become system-size dependent. We propose a simple method for extrapolating a macroscopic, nondispersive mobility from the system-size dependence of a microscopic one. The method is validated against a temperature-based extrapolation [Phys. Rev. B 82, 193202 (2010)]. In addition, we provide an analytic estimate, derived from a truncated Gaussian distribution, of the system sizes required to perform nondispersive charge transport simulations in systems with finite charge carrier density. This estimate is not limited to lattice models or specific rate expressions.
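    The extrapolation idea can be illustrated with a toy fit. The assumed functional form here (mobility depending linearly on inverse box length, with synthetic data) is a hypothetical stand-in for the paper's actual procedure, which the abstract does not specify:

```python
import numpy as np

# Hypothetical microscopic mobilities (arbitrary units) measured in
# simulation boxes of linear size L.  The 1/L dependence below is an
# assumed toy model, not the functional form used in the paper.
L = np.array([10.0, 20.0, 40.0, 80.0])
mu = 1.0 + 2.0 / L          # synthetic data: mu(L) = mu_inf + c / L

# Fit mu against 1/L and read off the intercept as the macroscopic
# (infinite-system, 1/L -> 0) mobility.
slope, intercept = np.polyfit(1.0 / L, mu, 1)
mu_macroscopic = intercept
```

    With exactly linear synthetic data the fit recovers the infinite-system intercept; with real simulation data one would fit over several box sizes and check the residuals before trusting the extrapolation.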

    Logic Meets Algebra: the Case of Regular Languages

    The study of finite automata and regular languages is a privileged meeting point of algebra and logic. Since the work of Büchi, regular languages have been classified according to their descriptive complexity, i.e. the type of logical formalism required to define them. The algebraic point of view on automata is an essential complement of this classification: by providing alternative, algebraic characterizations for the classes, it often yields the only opportunity for the design of algorithms that decide expressibility in some logical fragment. We survey the existing results relating the expressibility of regular languages in logical fragments of MSO[S] with algebraic properties of their minimal automata. In particular, we show that many of the best known results in this area share the same underlying mechanics and rely on a very strong relation between logical substitutions and block-products of pseudovarieties of monoids. We also explain the impact of these connections on circuit complexity theory. Comment: 37 pages.
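    One concrete instance of such an algebraic characterization is Schützenberger's classical theorem: a regular language is first-order definable iff its syntactic monoid is aperiodic (contains no nontrivial group). A minimal sketch, not taken from the survey, computing the transition monoid of a DFA and testing aperiodicity (the two example languages are illustrative):

```python
def compose(f, g):
    """Apply map f first, then map g (maps are tuples over states 0..n-1)."""
    return tuple(g[f[q]] for q in range(len(f)))

def transition_monoid(n_states, alphabet, delta):
    """All state-to-state maps induced by words over the DFA's alphabet,
    including the identity (empty word).  States must be 0..n_states-1."""
    identity = tuple(range(n_states))
    gens = [tuple(delta[(q, a)] for q in range(n_states)) for a in alphabet]
    monoid, frontier = {identity}, [identity]
    while frontier:
        f = frontier.pop()
        for g in gens:
            h = compose(f, g)
            if h not in monoid:
                monoid.add(h)
                frontier.append(h)
    return monoid

def is_aperiodic(monoid):
    """Aperiodic iff every element m satisfies m^k = m^(k+1) for some k."""
    for m in monoid:
        cur, seen = m, set()
        while compose(cur, m) != cur:
            if cur in seen:          # powers of m cycle without stabilizing
                return False
            seen.add(cur)
            cur = compose(cur, m)
    return True

# Parity ("even number of a's"): its monoid is the group Z/2, so the
# language is not aperiodic, hence not first-order definable.
parity = transition_monoid(2, "a", {(0, "a"): 1, (1, "a"): 0})

# "Contains at least one a": aperiodic, hence first-order definable.
contains_a = transition_monoid(2, "a", {(0, "a"): 1, (1, "a"): 1})
```

    This is exactly the kind of decision procedure the algebraic viewpoint buys: membership in a logical fragment is reduced to an equational check on a finite monoid.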

    Learning Recursive Segments for Discourse Parsing

    Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation has relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, since some theories of discourse, like SDRT, allow for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1,445 EDUs), our system achieves encouraging performance results with an F-score of 73% for finding EDUs. Comment: published at LREC 201
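    The abstract does not spell the repairing heuristic out. One plausible sketch, not the authors' exact procedure: independently predicted candidate spans are kept greedily by score, dropping any span that crosses an already-kept one, so the surviving segments are guaranteed to be well-nested:

```python
def crosses(a, b):
    """True if spans a and b overlap without one containing the other."""
    (s1, e1), (s2, e2) = a, b
    return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1

def repair(candidates):
    """Greedy repair for global coherence: keep the highest-scoring
    spans first and discard any span crossing a kept one, so the
    output is a well-nested set of segments.  `candidates` is a list
    of (score, (start, end)) pairs -- a hypothetical interface."""
    kept = []
    for score, span in sorted(candidates, reverse=True):
        if all(not crosses(span, k) for k in kept):
            kept.append(span)
    return sorted(kept)
```

    Nesting is allowed (a span strictly inside another never "crosses" it), which is precisely what distinguishes this output space from the linear-sequence assumption criticized above.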

    Identification automatique des relations discursives "implicites" à partir de données annotées et de corpus bruts [Automatic identification of "implicit" discourse relations from annotated data and raw corpora]

    This paper presents a system for identifying "implicit" discourse relations (that is, relations that are not explicitly marked by a discourse connective). Given the small amount of annotated data available for this task, our system also resorts to additional automatically labeled data in which unambiguous connectives have been suppressed and used as relation labels, a method introduced by [Marcu & Echihabi 2002]. As shown by [Sporleder & Lascarides 2008] for English, this approach does not generalize well to implicit relations as annotated by humans. We show that the same conclusion applies to French, owing to important distribution differences between the two types of data. In consequence, we propose various simple methods, all inspired from work on domain adaptation, with the aim of better combining annotated data and artificial data. We evaluate these methods through various experiments carried out on the ANNODIS corpus: our best system reaches a labeling accuracy of 45.6%, corresponding to a significant 5.9% gain over a system solely trained on manually labeled data.
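    The Marcu & Echihabi style labeling step can be sketched as follows. The connective-to-relation mapping and the clause interface below are toy assumptions for illustration, not the actual French resources used:

```python
# Toy mapping from unambiguous discourse connectives to relations
# (illustrative only; real inventories are language-specific and
# restricted to connectives with a single possible relation).
CONNECTIVE_TO_RELATION = {
    "because": "explanation",
    "but": "contrast",
    "then": "narration",
}

def make_artificial_example(clause1, connective, clause2):
    """Suppress an unambiguous connective and keep it as the relation
    label, yielding an artificial 'implicit' training example."""
    relation = CONNECTIVE_TO_RELATION[connective]
    return (clause1, clause2), relation

pair, label = make_artificial_example(
    "the road was wet", "because", "it had rained")
```

    The distribution mismatch discussed above arises because clause pairs that authors chose to connect explicitly need not resemble the pairs they left implicit.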

    Joint Anaphoricity Detection and Coreference Resolution with Constrained Latent Structures

    This paper introduces a new structured model for learning anaphoricity detection and coreference resolution in a joint fashion. Specifically, we use a latent tree to represent the full coreference and anaphoric structure of a document at a global level, and we jointly learn the parameters of the two models using a version of the structured perceptron algorithm. Our joint structured model is further refined by the use of pairwise constraints which help the model to capture accurately certain patterns of coreference. Our experiments on the CoNLL-2012 English datasets show large improvements in both coreference resolution and anaphoricity detection, compared to various competing architectures. Our best coreference system obtains a CoNLL score of 81.97 on gold mentions, which is to date the best score reported on this setting.
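    Latent-tree coreference models typically decode by linking each mention to its highest-scoring previous mention, or to a dummy root when the mention is judged non-anaphoric; the chosen arcs form a tree over the document's mentions. A minimal sketch of that decoding step (the scoring function is a hand-set placeholder for the learned model):

```python
ROOT = -1  # dummy root; linking to it marks a mention as non-anaphoric

def decode_latent_tree(n_mentions, score):
    """For each mention i, pick the best antecedent among the dummy
    root and all previous mentions; the arcs form a latent tree.
    `score(a, i)` stands in for the learned arc-scoring model."""
    antecedents = []
    for i in range(n_mentions):
        candidates = [ROOT] + list(range(i))
        antecedents.append(max(candidates, key=lambda a: score(a, i)))
    return antecedents

def toy_score(a, i):
    # Hand-set scores for three mentions: 0 and 2 corefer, 1 is non-anaphoric.
    table = {(ROOT, 0): 1.0, (ROOT, 1): 1.0, (0, 1): -1.0,
             (ROOT, 2): 0.0, (0, 2): 2.0, (1, 2): -1.0}
    return table[(a, i)]
```

    Joint learning then adjusts the scores so that root arcs (anaphoricity decisions) and mention-mention arcs (coreference decisions) are trained together rather than by separate classifiers.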

    Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition

    An elegant approach to learning temporal orderings from texts is to formulate this problem as a constraint optimization problem, which can then be given an exact solution using Integer Linear Programming. This works well for cases where the number of possible relations between temporal entities is restricted to the mere precedence relation [Bramsen et al., 2006; Chambers and Jurafsky, 2008], but becomes impractical when considering all possible interval relations. This paper proposes two innovations, inspired from work on temporal reasoning, that control this combinatorial blow-up, therefore rendering exact ILP inference viable in the general case. First, we translate our network of constraints from temporal intervals to their endpoints, to handle a drastically smaller set of constraints while preserving the same temporal information. Second, we show that additional efficiency is gained by enforcing coherence on particular subsets of the entire temporal graphs. We evaluate these innovations through various experiments on TimeBank 1.2, and compare our ILP formulations with various baselines and oracle systems.
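    The interval-to-endpoint translation can be illustrated for a few Allen relations: each interval i contributes two time points, its start and its end, with start < end, and an interval relation becomes a small conjunction of point constraints. A sketch under that standard encoding (the relation inventory shown is only a fragment of Allen's thirteen relations):

```python
# Map a few Allen relations on intervals (i, j) to constraints over
# their endpoints; '<' and '=' are the only point relations needed.
ENDPOINT_CONSTRAINTS = {
    "before":   [("i_end", "<", "j_start")],
    "meets":    [("i_end", "=", "j_start")],
    "overlaps": [("i_start", "<", "j_start"),
                 ("j_start", "<", "i_end"),
                 ("i_end", "<", "j_end")],
    "during":   [("j_start", "<", "i_start"),
                 ("i_end", "<", "j_end")],
}

def to_endpoint_constraints(relation):
    """Translate one interval relation into point constraints, adding
    the implicit start-before-end axioms for both intervals."""
    axioms = [("i_start", "<", "i_end"), ("j_start", "<", "j_end")]
    return axioms + ENDPOINT_CONSTRAINTS[relation]
```

    The payoff for ILP is that consistency over points needs only the transitivity of `<` and `=`, far fewer constraints than encoding the full interval composition table directly.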

    Combining Natural and Artificial Examples to Improve Implicit Discourse Relation Identification

    This paper presents the first experiments on identifying implicit discourse relations (i.e., relations lacking an overt discourse connective) in French. Given the small amount of annotated data for this task, our system resorts to additional data automatically labeled using unambiguous connectives, a method introduced by (Marcu and Echihabi, 2002). We first show that a system trained solely on these artificial data does not generalize well to natural implicit examples, thus echoing the conclusion made by (Sporleder and Lascarides, 2008) for English. We then explain these initial results by analyzing the different types of distribution difference between natural and artificial implicit data. This finally leads us to propose a number of very simple methods, all inspired from work on domain adaptation, for combining the two types of data. Through various experiments on the French ANNODIS corpus, we show that our best system achieves an accuracy of 41.7%, corresponding to a significant 4.4% gain over a system solely trained on manually labeled data.
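    One standard domain-adaptation recipe that "very simple methods" of this kind often resemble is Daumé III's feature augmentation, which copies each feature into a shared version and a domain-specific version so the learner can separate domain-general from domain-specific weight; whether the paper uses exactly this variant is not stated in the abstract. A sketch:

```python
def augment(features, domain):
    """Feature augmentation for domain adaptation: every feature fires
    once in a shared namespace and once in a domain-specific one.
    `features` is a dict of feature name -> value; `domain` might be
    'natural' (annotated) or 'artificial' (auto-labeled) data."""
    out = {}
    for name, value in features.items():
        out["shared:" + name] = value
        out[domain + ":" + name] = value
    return out
```

    Training on the union of both augmented datasets then lets the shared copies absorb whatever the artificial examples have in common with natural implicit relations, while the domain-specific copies soak up the distributional quirks of each source.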

    Apprentissage d'une hiérarchie de modèles à paires spécialisés pour la résolution de la coréférence [Learning a hierarchy of specialized pair models for coreference resolution]

    We propose a new method for significantly improving the performance of mention-pair models for coreference resolution. Given a set of indicators, our method learns how best to separate types of mention pairs into equivalence classes, each of which gives rise to a specific classification model. The proposed algorithmic procedure finds the best feature space (built from combinations of elementary features and indicators) for discriminating coreferential mention pairs. Although our approach explores a very large set of feature spaces, it remains efficient by exploiting the structure of the hierarchies built from the indicators. Our experiments on the English data of the CoNLL-2012 Shared Task indicate that our method yields performance gains over the initial model using only elementary features, regardless of the chain-formation method or the evaluation metric chosen. Our best system obtains an average of 67.2 in F1 over MUC, B3 and CEAF, which, despite its simplicity, places it among the best systems tested on these data.
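    The indicator-based split can be sketched as routing each mention pair to a specialized classifier keyed by its indicator values; the two indicators below are invented for illustration and the model factory is a placeholder:

```python
def indicator_key(pair):
    """Route a mention pair to an equivalence class based on coarse
    indicators; these two indicators are hypothetical examples."""
    return (pair["same_sentence"], pair["mention_types"])

class PairModelHierarchy:
    """One classifier per equivalence class of mention pairs, created
    lazily from a user-supplied model factory."""
    def __init__(self, make_model):
        self.make_model = make_model
        self.models = {}

    def model_for(self, pair):
        key = indicator_key(pair)
        if key not in self.models:
            self.models[key] = self.make_model()
        return self.models[key]
```

    The learning problem described above is then to choose which indicators to split on (and how deep) so that each specialized model sees a more homogeneous, easier-to-discriminate population of pairs.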

    Comparison of different algebras for inducing the temporal structure of texts

    This paper investigates the impact of using different temporal algebras for learning temporal relations between events. Specifically, we compare three interval-based algebras: Allen's (1983) algebra, Bruce's (1972) algebra, and the algebra derived from the TempEval-07 campaign. These algebras encode different granularities of relations and have different inferential properties. They in turn behave differently when used to enforce global consistency constraints on the building of a temporal representation. Through various experiments on the TimeBank/AQUAINT corpus, we show that although the TempEval relation set leads to the best classification accuracy, it is too vague to be used for enforcing consistency. By contrast, the other two relation sets are similarly harder to learn, but more useful when global consistency is important. Overall, the Bruce algebra is shown to give the best compromise between learnability and expressive power.
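    The inferential properties at stake come from composition: given r1(A, B) and r2(B, C), the algebra's composition table constrains the possible relations r(A, C), and a vaguer relation set yields weaker constraints. A sketch for the simplest case, a precedence-only point-algebra-like fragment in which a set of base relations encodes a disjunction (the full Allen table, with 13x13 entries, is omitted):

```python
# Composition table for a precedence fragment with base relations
# 'before' (<), 'after' (>), and 'equal' (=).
COMPOSE = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("=", "<"): {"<"},
    (">", ">"): {">"}, (">", "="): {">"}, ("=", ">"): {">"},
    ("=", "="): {"="},
    ("<", ">"): {"<", "=", ">"},   # no information recoverable
    (">", "<"): {"<", "=", ">"},
}

def compose_sets(r1, r2):
    """Compose two disjunctive relations: union over all base pairs.
    The larger the result, the less the algebra lets us infer."""
    out = set()
    for a in r1:
        for b in r2:
            out |= COMPOSE[(a, b)]
    return out
```

    Enforcing global consistency amounts to repeatedly intersecting each edge's relation set with what composition along two-edge paths allows; a relation set like TempEval's, whose compositions are mostly uninformative, prunes almost nothing, which is the trade-off the experiments above quantify.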